Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment
Distribution alignment has many applications in deep learning, including domain adaptation and unsupervised image-to-image translation. Most prior work on unsupervised distribution alignment relies either on minimizing simple non-parametric statistical distances such as maximum mean discrepancy or on adversarial alignment. However, the former fails to capture the structure of complex real-world distributions, while the latter is difficult to train and does not provide any universal convergence guarantees or automatic quantitative validation procedures. In this paper, we propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows. We show that, under certain assumptions, this combination yields a deep neural likelihood-based minimization objective that attains a known lower bound upon convergence. We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
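To make the objective concrete, here is a toy numerical sketch of the idea, not the authors' implementation: a single affine map stands in for the normalizing flow, and a Gaussian maximum-likelihood fit stands in for the deep density model. The log-likelihood ratio compares one model fit to the pooled (aligned-source plus target) data against separate fits to each domain; it is non-negative and approaches its lower bound of zero as the distributions align. All names and the grid search are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
source = rng.normal(5.0, 2.0, 1000)  # source domain samples
target = rng.normal(0.0, 1.0, 1000)  # target domain samples

def avg_nll_at_mle(x):
    # Average per-sample negative log-likelihood of x under the
    # best-fitting (MLE) Gaussian; at the MLE this depends only
    # on the sample variance.
    var = x.var() + 1e-12
    return 0.5 * np.log(2 * np.pi * var) + 0.5

def llr_statistic(aligned, target):
    # Total NLL of a single model fit to the pooled data, minus the
    # total NLL of separate per-domain fits. Always >= 0, and == 0
    # exactly when one fit explains both samples equally well.
    pooled = np.concatenate([aligned, target])
    joint = len(pooled) * avg_nll_at_mle(pooled)
    separate = (len(aligned) * avg_nll_at_mle(aligned)
                + len(target) * avg_nll_at_mle(target))
    return joint - separate

def objective(a, b):
    # The "flow" here is just x -> a*x + b.
    return llr_statistic(a * source + b, target)

# Crude grid search over the affine parameters; the paper's method
# would instead train an invertible network by gradient descent.
best = min((objective(a, b), a, b)
           for a in np.linspace(0.1, 1.0, 19)
           for b in np.linspace(-6.0, 1.0, 29))
```

Because the statistic has a known lower bound (zero), the attained objective value itself serves as a quantitative check of alignment quality, which is the "automatic quantitative validation" the abstract contrasts with adversarial training.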
Review for NeurIPS paper: Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment
Weaknesses: - Central parts of the paper are unclear, e.g., in line 80, \log P_M(X; \theta) should be the negative cross-entropy. The only quantitative results are on adaptation from USPS to MNIST (line 268); however, prior work [1] achieves 96.5% accuracy, compared to the 55% accuracy of the proposed method. It would be desirable to evaluate the proposed approach on the more complex Facades/Maps/Cityscapes datasets using the MSE metric to facilitate comparison with AlignFlow and [1]. It is also unclear how the inductive bias from each of the datasets influences the shared space.
Meta-review for NeurIPS paper: Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment
After discussion, all reviewers, and the meta-reviewer, agree that the paper should be accepted. As the authors show, the method in its current form may not scale well to higher dimensions. While a method without this limitation would obviously be preferable, the reviewers agree that this limitation can be addressed in future work, where the connection with GANs that the authors establish may be helpful.